# Preference Optimization

## Gemma 2 9B It SimPO
License: MIT
A Gemma 2 9B instruction-tuned model fine-tuned on the gemma2-ultrafeedback-armorm dataset using the SimPO objective for preference optimization tasks.
Type: Large Language Model · Transformers
Author: princeton-nlp · Downloads: 21.34k · Likes: 164
## Llama 3 Instruct 8B SimPO
SimPO is a preference optimization method that eliminates the need for a reference reward model, simplifying the traditional RLHF pipeline by optimizing language models directly on preference data.
Type: Large Language Model · Transformers
Author: princeton-nlp · Downloads: 1,924 · Likes: 58
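The SimPO objective described above scores each response by its length-normalized log-probability under the policy (no reference model) and asks the chosen response to beat the rejected one by a target margin. A minimal sketch of the per-pair loss follows; the `beta` and `gamma` values are illustrative defaults, not the exact hyperparameters used for these checkpoints:

```python
import math

def simpo_loss(logp_chosen: float, len_chosen: int,
               logp_rejected: float, len_rejected: int,
               beta: float = 2.0, gamma: float = 0.5) -> float:
    """Per-pair SimPO loss from summed token log-probs and sequence lengths.

    Implicit reward = beta * (average per-token log-prob); the loss is
    -log sigmoid(reward_chosen - reward_rejected - gamma).
    """
    # Length-normalized implicit rewards (no reference model needed)
    r_chosen = beta * logp_chosen / len_chosen
    r_rejected = beta * logp_rejected / len_rejected
    margin = r_chosen - r_rejected - gamma
    # Numerically stable -log(sigmoid(margin)) = log(1 + exp(-margin))
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))
```

In actual training this quantity is averaged over a batch of preference pairs and backpropagated through the policy's token log-probabilities; the scalar version here only shows the shape of the objective.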